Add moe benchmark routine #1327
base: main
Conversation
Summary of Changes
Hello @aleozlx, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
I'm adding a new benchmark routine for Mixture-of-Experts (MoE) operations within the FlashInfer framework. This routine specifically targets TensorRT-LLM's FP4 and FP8 block-scale and per-tensor scale MoE implementations, allowing for comprehensive performance evaluation and comparison of these specialized kernels.
Highlights
- **Expanded Benchmarking Capabilities:** I've introduced a dedicated MoE benchmarking routine (`routines/moe.py`) to evaluate the performance of TensorRT-LLM's FP4 and FP8 block-scale and per-tensor scale MoE operations.
- **Integrated MoE Routines:** The new MoE routines are seamlessly integrated into the main `flashinfer_benchmark.py` script, enabling their execution via command-line arguments.
- **Granular MoE Configuration:** I've added extensive command-line arguments to control MoE benchmark parameters, such as `num_tokens`, `hidden_size`, `intermediate_size`, `num_experts`, `top_k`, and various routing and scaling factors, providing fine-grained control over test scenarios.
- **Flexible Weight Layout Support:** The FP8 block scale MoE kernel now supports different weight layouts (`MajorK`, `MajorMn`, `BlockMajorK`) and shuffled weights, enhancing the flexibility of performance testing and optimization (see the sketch after this list).
- **Comprehensive Sample Tests:** A new sample test list (`benchmarks/samples/sample_moe_testlist.txt`) has been included, providing examples for benchmarking various MoE configurations, including different routing methods and weight processing variants.
- **C++ Backend Alignment:** I've updated the underlying C++ kernels and their interfaces to accommodate the new weight layout options and ensure proper functionality and performance for the MoE benchmarks.
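For orientation, here is a minimal sketch of what the three weight layouts named above could look like as an enum. The layout names come from this PR's highlights; the numeric values and the comments describing each layout are assumptions for illustration, not FlashInfer's actual interface.

```python
from enum import IntEnum

# Hypothetical enum mirroring the three layouts named above; the actual
# values and definitions in FlashInfer's interface may differ.
class WeightLayout(IntEnum):
    MajorK = 0       # K is the contiguous (fastest-varying) dimension
    MajorMn = 1      # M/N is the contiguous dimension
    BlockMajorK = 2  # K split into blocks, e.g. [K / block_k, Mn, block_k]
```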
Code Review
This pull request introduces a new benchmark routine for Mixture of Experts (MoE) kernels, including Python benchmarking scripts, C++ kernel launchers, and corresponding test updates. The changes are well-structured and add valuable testing capabilities. My review focuses on improving the correctness and maintainability of the new benchmark code. I've identified an incorrect FLOPs calculation and a reporting bug that could lead to misleading performance metrics. Additionally, I've suggested refactoring to remove code duplication and magic numbers.
```python
elif args.routine in [
    "trtllm_fp4_block_scale_moe",
    "trtllm_fp8_block_scale_moe",
    "trtllm_fp8_per_tensor_scale_moe",
]:
```
```python
flops = (
    args.num_tokens
    * args.top_k
    * (4 * args.hidden_size * args.intermediate_size)
)
```
The FLOPs calculation appears to be incorrect. For a gated MoE layer, the total FLOPs are approximately `6 * num_tokens * top_k * hidden_size * intermediate_size`. The current implementation uses a factor of 4 instead of 6.

The breakdown is:
- Up-projection GEMM (`(T, H) @ (H, 2I)`): `2 * T * H * 2I = 4 * T * H * I` FLOPs.
- Down-projection GEMM (`(T, I) @ (I, H)`): `2 * T * I * H` FLOPs.

Total FLOPs per token routed to an expert is `6 * H * I`. Summing over all experts gives `6 * num_tokens * top_k * hidden_size * intermediate_size`.

This issue is also present in `test_trtllm_fp8_block_scale_moe` (lines 650-654) and `test_trtllm_fp8_per_tensor_scale_moe` (lines 889-893).
```suggestion
flops = (
    args.num_tokens
    * args.top_k
    * (6 * args.hidden_size * args.intermediate_size)
)
```
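To make the factor of 6 concrete, here is a small self-contained sketch of the corrected count. The helper name and the sample sizes are made up for illustration; only the breakdown itself comes from the review comment above.

```python
def gated_moe_flops(num_tokens: int, hidden_size: int,
                    intermediate_size: int, top_k: int) -> int:
    """FLOPs for a gated MoE layer, counting a multiply-add as 2 FLOPs."""
    T, H, I = num_tokens * top_k, hidden_size, intermediate_size
    up_proj = 2 * T * H * (2 * I)  # (T, H) @ (H, 2I): fused gate + up projection
    down_proj = 2 * T * I * H      # (T, I) @ (I, H)
    return up_proj + down_proj     # = 6 * T * H * I

# Sanity check with made-up sizes (128 tokens, top_k=2, H=4096, I=14336):
assert gated_moe_flops(128, 4096, 14336, 2) == 6 * 128 * 2 * 4096 * 14336
```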
cur_res["routing_method_type"] = args.routing_method_type | ||
cur_res["routed_scaling_factor"] = args.routed_scaling_factor | ||
cur_res["tile_tokens_dim"] = test_data["tile_tokens_dim"] | ||
cur_res["use_shuffled_weight"] = False # FP4 always uses shuffled |
The value for `use_shuffled_weight` is hardcoded to `False`, but the comment indicates it's always shuffled for FP4. The C++ implementation confirms that `useShuffledMatrixA` is always true for the FP4 MoE kernel. This should be set to `True` for correct benchmark reporting.
cur_res["use_shuffled_weight"] = False # FP4 always uses shuffled | |
cur_res["use_shuffled_weight"] = True # FP4 always uses shuffled |
```python
# Apply weight processing (shuffling and layout conversion) if requested
if args.use_shuffled_weight:
    # FIXME: this depends on the kernel internals
    epilogue_tile_m = 64  # For FP8 block scale
```
The value for `epilogue_tile_m` is a magic number. The `FIXME` comment indicates awareness of its dependency on kernel internals. To improve maintainability and readability, it would be better to define this as a constant with a comment explaining its origin and purpose. This also applies to `block_k` on line 541 and `epilogue_tile_m` on line 746 in `test_trtllm_fp8_per_tensor_scale_moe`.
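A minimal sketch of the suggested refactor follows. The constant name is invented here; only the value 64 is taken from the snippet above, and the right value for `block_k` is not shown in this review.

```python
# Hypothetical constant name; the comment records where the value comes
# from so the benchmark code stays readable and easy to update.
EPILOGUE_TILE_M_FP8_BLOCK_SCALE = 64  # epilogue tile rows of the TRT-LLM FP8 block-scale kernel

# Usage at the call site:
epilogue_tile_m = EPILOGUE_TILE_M_FP8_BLOCK_SCALE
```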
📌 Description

🔍 Related Issues

depends on #1297

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

- I have installed `pre-commit` by running `pip install pre-commit` (or used your preferred method).
- I have installed the hooks with `pre-commit install`.
- I have run the hooks manually with `pre-commit run --all-files` and fixed any reported issues.

🧪 Tests

- All tests are passing (`unittest`, etc.).

Reviewer Notes